    A Saliency-Based Technique for Advertisement Layout Optimisation to Predict Customers’ Behaviour

    Customer retail environments represent an exciting and challenging context in which to develop and deploy cutting-edge computer vision techniques for more engaging customer experiences. Visual attention is one of the aspects that play a critical role in the analysis of customers' behaviour towards advertising campaigns continuously displayed in shops and retail environments. In this paper, we approach the optimisation of advertisement layout content, aiming to grab the audience's visual attention more effectively. We propose a fully automatic method that delivers the most effective layout content configuration, using the saliency maps of each possible set of images within a given grid layout. Visual saliency deals with the identification of the most critical regions of a picture from a perceptual viewpoint. We want to assess the feasibility of saliency maps as a tool for the optimisation of advertisements, considering all possible permutations of the images which compose the advertising campaign itself. We start by analysing advertising campaigns consisting of a given spatial layout and a certain number of images, and we run a deep-learning-based saliency model over all permutations. Noticeable differences among global and local saliency maps occur across different layout contents built from the same images. This suggests that each image contributes to the global visual saliency through both its content and its location within the given layout. On top of this consideration, we employ a set of advertising images to build a graphical campaign with a given design. We extract relative variance values from the local saliency maps of all permutations. We hypothesise that the inverse of the relative variance can be used as an Effectiveness Score (ES) to identify those layout content permutations showing the most balanced spatial distribution of salient pixels. A group of 20 participants ran eye-tracking sessions over the same advertising layouts to validate the proposed method.
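    The scoring idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of the per-cell mean saliency, and the brute-force search over permutations are all assumptions; the actual saliency model and scoring details are in the paper.

    ```python
    import itertools
    import numpy as np

    def effectiveness_score(local_saliency_maps):
        """Score one layout permutation as the inverse of the relative
        variance of mean saliency across the layout's cells.
        A higher score means salient pixels are spread more evenly."""
        means = np.array([m.mean() for m in local_saliency_maps])
        relative_variance = means.var() / (means.mean() ** 2)
        return 1.0 / relative_variance

    def best_permutation(images, saliency_fn):
        """Try every ordering of `images` in the grid and keep the one
        with the highest Effectiveness Score (hypothetical helper)."""
        best, best_score = None, -np.inf
        for perm in itertools.permutations(images):
            maps = [saliency_fn(img) for img in perm]  # one local map per cell
            score = effectiveness_score(maps)
            if score > best_score:
                best, best_score = perm, score
        return best, best_score
    ```

    Note that exhaustive enumeration grows factorially with the number of images, which is tractable only for the small grids considered in such campaigns.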

    Study of accuracy profiles for different elements by ICP-AES and ICP-MS

    The accuracy profile is a graphical representation of accuracy as a function of concentration, that is, of the variation of both trueness (bias) and precision. This profile reflects the ability of the instrument to work within an accuracy tolerance over a given concentration range. Accuracy profiles are a possible statistical tool for the interpretation of validation studies. The objective of this work was to examine the accuracy profiles of different elements by ICP-AES and ICP-MS. The study is based on the use of reference materials and takes into account internal repeatability and inter-day reproducibility. This work gives access to the limits of quantification of the various elements analysed and permits evaluation of the measurement uncertainties over the concentration range studied, both important parameters in the context of method validation. A comparison of the results obtained by the two techniques highlighted the performance of each device, since the accuracy-profile method allows the capabilities of the instrument and of the method to be observed directly. Examples of results obtained during method validation for different analytical framework programmes are presented to illustrate the theme of the accuracy profile. The accuracy profile offers a graphical method for simultaneously establishing whether precision and trueness allow one to decide that the analytical method is correctly adapted to a given goal.
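    One point of an accuracy profile at a single concentration level might be computed roughly as below. This is a simplified sketch under stated assumptions: the function name and acceptance limit are hypothetical, and the crude mean ± 2 s interval stands in for the β-expectation tolerance interval normally used in accuracy profiles.

    ```python
    import numpy as np

    def accuracy_profile_point(measurements, reference_value, acceptance_limit=0.15):
        """One point of an accuracy profile at a single concentration level.

        `measurements` is a 2-D array: rows are days (inter-day reproducibility),
        columns are replicates within a day (repeatability). Returns the relative
        bias, a crude relative tolerance interval (mean +/- 2 s), and whether that
        interval stays inside the +/- acceptance_limit band.
        """
        x = np.asarray(measurements, dtype=float)
        day_means = x.mean(axis=1)
        grand_mean = x.mean()
        bias = (grand_mean - reference_value) / reference_value
        s_repeat = np.sqrt(np.mean(x.var(axis=1, ddof=1)))   # within-day (repeatability)
        s_between = day_means.std(ddof=1)                     # between-day component
        s_intermediate = np.sqrt(s_repeat**2 + s_between**2)  # intermediate precision
        lo = (grand_mean - 2 * s_intermediate - reference_value) / reference_value
        hi = (grand_mean + 2 * s_intermediate - reference_value) / reference_value
        accepted = (lo > -acceptance_limit) and (hi < acceptance_limit)
        return bias, (lo, hi), accepted
    ```

    Repeating this at several concentration levels of a reference material, and plotting the interval bounds against concentration, yields the profile from which limits of quantification can be read off.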

    Toward a head movement-based system for multilayer digital content exploration

    In this article, we propose a novel technique based on head-movement tracking to explore multilayer digital content. We extend an existing method by Kazemi et al., dealing with the extraction of facial landmarks, to define the “head-gaze” of the user. We use the “head-gaze” to calculate the user's on-screen coordinates. Hovering the cursor over an interactive area for a given time threshold allows users to explore the next layer's contents. Our experimental sessions allowed us to measure the technique's level of control and usability. Our results were promising, and users were able to interact with considerably small regions. Furthermore, our lightweight method can be used with a low-cost camera or webcam and a wide range of screen sizes and distances.
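    The dwell-based interaction described above (hover over a region for a time threshold to activate it) can be sketched as follows. This is an illustrative assumption, not the authors' code: the class name, region format, and the upstream head-gaze-to-coordinates mapping are all hypothetical; only the cursor samples are consumed here.

    ```python
    import time

    class DwellSelector:
        """Trigger an action when the head-gaze cursor stays inside an
        interactive region for longer than `dwell_s` seconds."""

        def __init__(self, regions, dwell_s=1.0):
            self.regions = regions          # name -> (x0, y0, x1, y1) in pixels
            self.dwell_s = dwell_s
            self._current = None            # region the cursor is currently in
            self._entered_at = None         # when the cursor entered it

        def update(self, x, y, now=None):
            """Feed one on-screen cursor sample; return the region name
            once the dwell threshold is reached, else None."""
            now = time.monotonic() if now is None else now
            hit = next((name for name, (x0, y0, x1, y1) in self.regions.items()
                        if x0 <= x <= x1 and y0 <= y <= y1), None)
            if hit != self._current:        # entered a new region (or left one)
                self._current, self._entered_at = hit, now
                return None
            if hit is not None and now - self._entered_at >= self.dwell_s:
                self._entered_at = float("inf")   # fire only once per entry
                return hit
            return None
    ```

    Feeding the selector one sample per video frame gives the hover-to-activate behaviour: the region fires once after the threshold and re-arms only when the cursor leaves and re-enters it.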